12 research outputs found

    An Advanced Telerobotic Control System for a Mobile Robot with Multisensor Feedback

    This paper presents an advanced telerobotic control system for a mobile robot with multisensor feedback. A telecontrol concept for various degrees of cooperation between a human operator and a mobile robot is described. With multiple sensors on board, the robot at the remote site can adjust its path while continuously accepting commands from the human operator. Interactive modelling, which allows the modelling of an unknown environment and makes landmarks known to the robot, is introduced. A graphical user interface and a 3-D animation system are important elements of the teleoperation; they are integrated in this system to support the operator through task analysis, offline teaching and on-line monitoring. Experiments performed with the mobile robot PRIAMOS are discussed. Key Words. Mobile Robot, Telerobotics, Multisensor, Man-Machine Systems 1. INTRODUCTION Telerobotics has been an active research field in robotics for many years (Sheridan, 1992). Many researchers have applied this technique in outer spa..
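    The shared-control idea the abstract describes (the robot adjusting its path while still accepting operator commands) can be sketched as a simple potential-field blend. This is an illustrative sketch only, not the paper's actual controller; the function name, the repulsive-gain formula and the safety distance are assumptions.

```python
import numpy as np

def shared_control_step(operator_cmd, obstacle_dists, obstacle_dirs,
                        d_safe=0.5, gain=1.0):
    """Blend an operator velocity command with a repulsive correction
    derived from range-sensor readings (potential-field style sketch)."""
    correction = np.zeros(2)
    for d, u in zip(obstacle_dists, obstacle_dirs):
        if d < d_safe:
            # push away from any obstacle closer than the safety distance
            correction -= gain * (1.0 / d - 1.0 / d_safe) * np.asarray(u)
    return np.asarray(operator_cmd) + correction

# Operator drives straight ahead; an obstacle 0.3 m away on the left
# bends the commanded velocity to the right.
v = shared_control_step([1.0, 0.0], obstacle_dists=[0.3],
                        obstacle_dirs=[[0.0, 1.0]])
```

    The operator's intent dominates far from obstacles; close to one, the sensor-based term grows and deflects the path, which matches the "various degrees of cooperation" idea.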

    A Hidden Markov Model Based Sensor Fusion Approach For Recognizing . . .

    The Programming by Demonstration (PbD) technique aims at teaching a robot to accomplish a task by learning from a human demonstration. In a manipulation context, recognizing the demonstrator's hand gestures, specifically when and how objects are grasped, plays a significant role. Here, a system is presented that uses both hand shape and contact point information, obtained from a data glove and tactile sensors, to recognize continuous human grasp sequences. Sensor fusion, grasp classification and task segmentation are performed by a Hidden Markov Model recognizer that distinguishes the 14 grasp types of Kamakura's taxonomy. An accuracy of up to 92.2% for a single-user system and 90.9% for a multiple-user system could be achieved.
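    The core of such an HMM recognizer is likelihood scoring: each grasp class gets its own model, and an observation sequence is assigned to the class whose HMM explains it best. A minimal sketch with the forward algorithm in log space follows; the toy two-state models and the symbol meanings are invented for illustration and are not the paper's trained models.

```python
import numpy as np

def log_forward(obs, log_pi, log_A, log_B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm in log space for stability."""
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        # marginalize over the previous state, then emit the next symbol
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
    return np.logaddexp.reduce(alpha)

def classify(obs, models):
    """Pick the grasp class whose HMM gives the highest likelihood."""
    scores = {name: log_forward(obs, *m) for name, m in models.items()}
    return max(scores, key=scores.get)

# Toy models for two hypothetical grasp classes; symbol 0 stands for a
# "power"-like hand shape, symbol 1 for a "precision"-like one.
log = np.log
pi = log([0.5, 0.5])
A = log([[0.9, 0.1], [0.1, 0.9]])
models = {
    "power":     (pi, A, log([[0.9, 0.1], [0.8, 0.2]])),
    "precision": (pi, A, log([[0.1, 0.9], [0.2, 0.8]])),
}
```

    A full system along the paper's lines would train one HMM per grasp type on glove and tactile features (e.g. via Baum-Welch) and run this scoring over a sliding window to segment a continuous demonstration.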

    A cognitive architecture for a humanoid robot: A first approach

    Abstract — Visions of future life picture humans with intelligent humanoid robotic systems taking part in their everyday lives. Researchers therefore strive to supply robots with adequate artificial intelligence in order to achieve natural and intuitive interaction between human being and robotic system. Within the German Humanoid Project we focus on learning and cooperating multimodal robotic systems. In this paper we present a first cognitive architecture for our humanoid robot: the architecture combines a hierarchical three-layered form on the one hand with a composition of behaviour-specific modules on the other. Perception, learning, planning of actions, motor control, and human-like communication play an important role in the robotic system and are embedded step by step in our architecture.

    Embodied Neuromorphic Vision with Continuous Random Backpropagation

    Spike-based communication between biological neurons is sparse and unreliable. This enables the brain to process visual information from the eyes efficiently. Taking inspiration from biology, artificial spiking neural networks coupled with silicon retinas attempt to model these computations. Recent findings in machine learning allowed the derivation of a family of powerful synaptic plasticity rules approximating backpropagation for spiking networks. Are these rules capable of processing real-world visual sensory data? In this paper, we evaluate the performance of Event-Driven Random Back-Propagation (eRBP) at learning representations from event streams provided by a Dynamic Vision Sensor (DVS). First, we show that eRBP matches state-of-the-art performance on the DvsGesture dataset with the addition of a simple covert attention mechanism. By remapping visual receptive fields relative to the center of motion, this attention mechanism provides translation invariance at low computational cost compared to convolutions. Second, we successfully integrate eRBP in a real robotic setup, where a robotic arm grasps objects according to detected visual affordances. In this setup, visual information is actively sensed by a DVS mounted on a robotic head performing microsaccadic eye movements. We show that our method classifies affordances within 100 ms after microsaccade onset, which is comparable to human performance reported in behavioral studies. Our results suggest that advances in neuromorphic technology and plasticity rules enable the development of autonomous robots operating at high speed and low energy consumption.
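    The attention mechanism described above amounts to re-centering the event coordinates on the center of motion before they reach the classifier. A minimal sketch of that remapping, using the event centroid as a stand-in for the motion center (the exact estimator in the paper may differ; names and sensor resolution here are assumptions):

```python
import numpy as np

def remap_events(events, width=64, height=64):
    """Shift a batch of DVS events so their centroid lands at the
    center of the receptive field, giving translation invariance.
    events: array-like of (x, y, polarity) rows."""
    ev = np.asarray(events, dtype=float)
    cx, cy = ev[:, 0].mean(), ev[:, 1].mean()
    # translate spatial coordinates only; polarity is left untouched
    ev[:, 0] += width / 2 - cx
    ev[:, 1] += height / 2 - cy
    return ev
```

    Because the shift is a single vector addition per event batch, it is far cheaper than achieving the same invariance with convolutional weight sharing.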

    Integration of 6D object localization and obstacle detection for collision free robotic manipulation

    Abstract — The major goal of research regarding mobile service robotics is to enable a robot to assist human beings in their everyday life. This implies that the robot will have to deal with everyday life environments. One of the most important steps towards capable service robots is to enhance their ability to operate well in unstructured living environments. In this paper we focus on the integration of object recognition, obstacle detection and collision-free manipulation to increase a service robot's manipulation abilities in the context of highly unstructured environments.
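    At the heart of collision-free manipulation is a geometric query: does a planned motion pass too close to a detected obstacle? A minimal sketch of that primitive, with obstacles approximated as spheres and the motion as a straight line segment (a simplification for illustration, not the paper's planner):

```python
import numpy as np

def segment_hits_sphere(p0, p1, center, radius):
    """True if the straight-line motion p0 -> p1 passes within
    `radius` of a spherical obstacle at `center`."""
    p0, p1, c = map(np.asarray, (p0, p1, center))
    d = p1 - p0
    # parameter of the closest point on the segment, clamped to [0, 1]
    t = np.clip(np.dot(c - p0, d) / np.dot(d, d), 0.0, 1.0)
    closest = p0 + t * d
    return np.linalg.norm(c - closest) <= radius
```

    A planner would run this query for every segment of a candidate arm trajectory against every obstacle reported by the detection stage, rejecting trajectories for which any check fires.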

    INPRES (intraoperative presentation of surgical planning and simulation results) – augmented reality for craniofacial surgery

    In this paper we present recent developments and pre-clinical validation results of our approach for augmented reality (AR, for short) in craniofacial surgery. A commercial Sony Glasstron display is used for optical see-through overlay of surgical planning and simulation results with a patient inside the operating room (OR). For tracking the glasses, the patient and various medical instruments, an NDI Polaris system is used as the standard solution. A complementary inside-out navigation approach has been realized with a panoramic camera. This device is mounted on the head of the surgeon for tracking of fiducials placed on the walls of the OR. Further tasks described include the calibration of the head-mounted display (HMD), the registration of virtual objects with the real world and the detection of occlusions in the object overlay with the help of two miniature CCD cameras. The evaluation of our work took place in the laboratory environment and showed promising results. Future work will concentrate on the optimization of the technical features of the prototype and on the development of a system for everyday clinical use.
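    The registration step mentioned above is commonly solved as a least-squares rigid alignment of corresponding fiducial points, e.g. with the SVD-based Kabsch/Umeyama method. The sketch below shows that standard technique; it is a generic illustration, not the paper's specific calibration pipeline.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t,
    via the SVD-based Kabsch/Umeyama method on paired fiducials."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # cross-covariance of the centered point sets
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # sign correction guarantees a proper rotation (det R = +1)
    D = np.diag([1.0] * (src.shape[1] - 1) + [np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s
```

    Given at least three non-collinear fiducials measured in both the tracker frame and the virtual-model frame, the recovered (R, t) places virtual planning data over the real patient.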